
    New bounds on the Lieb-Thirring constants

    Improved estimates on the constants $L_{\gamma,d}$, for $1/2 < \gamma < 3/2$ and $d \in \mathbb{N}$, in the inequalities for the eigenvalue moments of Schrödinger operators are established.
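    For context, these are the constants in the Lieb-Thirring bounds for Schrödinger operators $-\Delta + V$ on $L^2(\mathbf{R}^d)$; in their standard form, with $\lambda_j$ the negative eigenvalues and $V_- := \max(-V, 0)$ the negative part of the potential, the inequalities read

    \[
        \sum_j |\lambda_j|^{\gamma} \;\le\; L_{\gamma,d} \int_{\mathbf{R}^d} V_-(x)^{\gamma + d/2}\, dx .
    \]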

    Finite lifetime eigenfunctions of coupled systems of harmonic oscillators

    We find a Hermite-type basis for which the eigenvalue problem associated to the operator $H_{A,B} := B(-\partial_x^2) + Ax^2$ acting on $L^2(\mathbf{R}; \mathbf{C}^2)$ becomes a three-term recurrence. Here $A$ and $B$ are two constant positive definite matrices with no other restriction. Our main result provides an explicit characterization of the eigenvectors of $H_{A,B}$ that lie in the span of the first four elements of this basis when $AB \neq BA$.
    Comment: 11 pages, 1 figure. Some typos were corrected in this new version.
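    As a quick way to get a feel for this operator (this is not code from the paper), one can discretize $H_{A,B}$ by finite differences and compute a few low-lying eigenvalues numerically; the matrices A and B below are illustrative choices with $AB \neq BA$.

        import numpy as np

        # Grid on a truncated interval; the eigenfunctions decay rapidly, so [-L, L] suffices.
        L, n = 12.0, 400
        x = np.linspace(-L, L, n)
        h = x[1] - x[0]

        # One-dimensional finite-difference Laplacian with Dirichlet boundary conditions.
        D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
              + np.diag(np.ones(n - 1), -1)) / h**2

        A = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite (illustrative choice)
        B = np.array([[1.0, 0.3], [0.3, 3.0]])   # positive definite, AB != BA

        # H acts on C^2-valued functions; as a (2n x 2n) matrix it is B (x) (-D2) + A (x) diag(x^2).
        H = np.kron(B, -D2) + np.kron(A, np.diag(x**2))

        print(np.sort(np.linalg.eigvalsh(H))[:6])   # a few lowest eigenvalues of the discretization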

    Many-Particle Hardy Inequalities

    In this paper we prove three different types of the so-called many-particle Hardy inequalities. One of them is of a "classical type", valid in any dimension $d \neq 2$. The second type deals with two-dimensional magnetic Dirichlet forms where every particle is supplied with a solenoid. Finally, we show that Hardy inequalities for fermions hold true in all dimensions.
    Comment: 20 pages
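    For orientation (this restates standard background, not the paper's new results): the classical one-particle Hardy inequality on $\mathbf{R}^d$, $d \neq 2$, states that for suitable $u$ (e.g. $u \in C_0^\infty(\mathbf{R}^d \setminus \{0\})$)

    \[
        \int_{\mathbf{R}^d} |\nabla u|^2 \, dx \;\ge\; \frac{(d-2)^2}{4} \int_{\mathbf{R}^d} \frac{|u(x)|^2}{|x|^2} \, dx ,
    \]

    while the many-particle versions considered here are, schematically (with the constants left unspecified), inequalities of the form

    \[
        \sum_{i=1}^{N} \int_{\mathbf{R}^{dN}} |\nabla_i u|^2 \, dX \;\ge\; C \sum_{i<j} \int_{\mathbf{R}^{dN}} \frac{|u|^2}{|x_i - x_j|^2} \, dX .
    \]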

    Szegö Type Limit Theorems


    Much Ado About Time: Exhaustive Annotation of Temporal Data

    Large-scale annotated datasets allow AI systems to learn from and build upon the knowledge of the crowd. Many crowdsourcing techniques have been developed for collecting image annotations. These techniques often implicitly rely on the fact that a new input image takes a negligible amount of time to perceive. In contrast, we investigate and determine the most cost-effective way of obtaining high-quality multi-label annotations for temporal data such as videos. Watching even a short 30-second video clip requires a significant time investment from a crowd worker; thus, requesting multiple annotations following a single viewing is an important cost-saving strategy. But how many questions should we ask per video? We conclude that the optimal strategy is to ask as many questions as possible in a HIT (up to 52 binary questions after watching a 30-second video clip in our experiments). We demonstrate that while workers may not correctly answer all questions, the cost-benefit analysis nevertheless favors consensus from multiple such cheap-yet-imperfect iterations over more complex alternatives. When compared with a one-question-per-video baseline, our method achieves a 10% improvement in recall (76.7% ours versus 66.7% baseline) at comparable precision (83.8% ours versus 83.0% baseline) in about half the annotation time (3.8 minutes ours compared to 7.1 minutes baseline). We demonstrate the effectiveness of our method by collecting multi-label annotations of 157 human activities on 1,815 videos.
    Comment: HCOMP 2016 Camera Ready
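    The "cheap-yet-imperfect" argument can be illustrated with a toy simulation (this is not the authors' pipeline, and the per-answer accuracies below are hypothetical): majority voting over several quick, noisy binary answers can match or beat a single more careful annotation.

        import random

        random.seed(0)

        def annotate(truth, accuracy):
            """One worker's binary answer: correct with probability `accuracy`."""
            return truth if random.random() < accuracy else not truth

        def majority(votes):
            return sum(votes) > len(votes) / 2

        n_items = 10_000
        truths = [random.random() < 0.5 for _ in range(n_items)]

        # One careful annotation (hypothetical 85% accuracy) vs. five quick ones (75% each).
        single = [annotate(t, 0.85) for t in truths]
        voted = [majority([annotate(t, 0.75) for _ in range(5)]) for t in truths]

        def frac_correct(preds):
            return sum(p == t for p, t in zip(preds, truths)) / n_items

        print(f"single careful annotation: {frac_correct(single):.3f}")   # ~0.85
        print(f"majority of 5 cheap ones:  {frac_correct(voted):.3f}")    # ~0.90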

    Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding

    Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. Since most of these scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation, from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 seconds, showing activities of 267 people from three continents. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks, including action recognition and automatic description generation. We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for the computer vision community.